On-off Markov Reward Models

Author

  • Khalid Begain
Abstract

The analysis of Markov Reward Models with preemptive resume policy usually results in a double transform expression, whose solution is based on an inverse transformation both in the time and in the reward variable domains. This paper discusses the case when the reward rates can be either 0 or a positive value c. These on-off Markov Reward Models are analyzed and a symbolic solution is presented, from which a numerical solution can be obtained by a computationally effective method. The mean completion time and the probability distribution of the states at completion are evaluated.
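As background for the double transform mentioned in the abstract, the following is a sketch of the standard Markov Reward Model formulation (notation assumed here for illustration, not the paper's own symbolic solution): Z(t) is the CTMC with generator Q and initial distribution π(0), R = diag(r_i) with r_i ∈ {0, c} collects the on-off reward rates, B(t) is the accumulated reward, and C(w) is the completion time of a work requirement w. Under the preemptive resume policy no accumulated work is lost, which gives the duality used below.

```latex
% Standard MRM quantities (notation assumed for illustration; not the paper's derivation).
% Z(t): CTMC with generator Q and initial distribution \pi(0);
% R = diag(r_i) with r_i in {0, c} (on-off reward rates).
\begin{align*}
  B(t) &= \int_0^t r_{Z(u)} \,\mathrm{d}u
    &&\text{accumulated reward (useful work) by time } t,\\
  C(w) &= \min\{\, t \ge 0 : B(t) \ge w \,\}
    &&\text{completion time of a work requirement } w,\\
  \Pr\{C(w) \le t\} &= \Pr\{B(t) \ge w\}
    &&\text{duality under preemptive resume (no work lost)},\\
  \int_0^{\infty} \! e^{-st}\, \mathrm{E}\!\left[e^{-v B(t)}\right] \mathrm{d}t
    &= \pi(0) \bigl(sI + vR - Q\bigr)^{-1} \mathbf{1}
    &&\text{double transform (time} \to s,\ \text{reward} \to v).
\end{align*}
```

Recovering completion-time quantities from the last expression requires inverting it in both s and v, which is the computationally demanding double inversion the abstract refers to.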

Related articles

Action selection for stochastic, delayed reward

The paper gives a novel account of quick decision making for maximising delayed reward in a stochastic world. The approach rests on observable operator models of stochastic systems, which generalize hidden Markov models. A particular kind of decision situation is outlined, and an algorithm is presented that allows one to estimate the probability of future reward with a computational cost of only ...

An Effective Approach to the Completion Time Analysis of On-off Markov Reward Models

Analysis of Markov Reward Models (MRM) with preemptive resume (prs) policy usually results in a double transform expression, whose solution is based on the inverse transformations both in time and reward variable domain. This paper discusses the case when the reward rates can be either 0 or a positive value c. These systems are called on-off MRMs. We analyze the completion time of on-off MRMs and p...
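To make the completion-time quantities concrete, here is a minimal Monte Carlo sketch (an illustration only, not the symbolic or numerical method of the paper): it simulates a small hypothetical on-off MRM and estimates the mean completion time and the distribution of the state in which completion occurs. The generator Q, the rate c, the work requirement W, and the initial state below are made-up example values.

```python
# Monte Carlo sketch of an on-off Markov Reward Model: a CTMC accumulates work
# at rate c in "on" states and at rate 0 in "off" states; under preemptive
# resume no work is lost, so completion occurs when accumulated reward first
# reaches the work requirement W.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state CTMC: states 0 and 1 are "on" (rate c), state 2 is "off".
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])
c = 1.0                      # reward (work) rate in the "on" states
r = np.array([c, c, 0.0])    # on-off reward vector
W = 4.0                      # work requirement to be completed
initial_state = 0

def sample_completion(Q, r, W, start):
    """Simulate one trajectory; return (completion_time, state_at_completion)."""
    t, acc, state = 0.0, 0.0, start
    while True:
        rate = -Q[state, state]
        sojourn = rng.exponential(1.0 / rate)
        # Does the work requirement finish during this sojourn?
        if r[state] > 0 and acc + r[state] * sojourn >= W:
            return t + (W - acc) / r[state], state
        t += sojourn
        acc += r[state] * sojourn
        # Jump to the next state with probabilities proportional to Q[state, j].
        probs = np.maximum(Q[state], 0.0)
        probs /= probs.sum()
        state = rng.choice(len(r), p=probs)

samples = [sample_completion(Q, r, W, initial_state) for _ in range(20000)]
times = np.array([s[0] for s in samples])
states = np.array([s[1] for s in samples])

print("estimated mean completion time:", times.mean())
for j in range(len(r)):
    print(f"P(state {j} at completion) ~ {(states == j).mean():.3f}")
```

Note that with on-off rewards the completing state is always an "on" state, since no work accumulates in "off" states; the simulation reflects this directly.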

Aggregation Methods for Markov Reward Chains with Fast and Silent Transitions

We analyze derivation of Markov reward chains from intermediate performance models that arise from formalisms for compositional performance analysis like stochastic process algebras, (generalized) stochastic Petri nets, etc. The intermediate models are typically extensions of continuous-time Markov reward chains with instantaneous labeled transitions. We give stochastic meaning to the intermedi...

Composite Performance and Dependability Analysis

Trivedi, K.S., J.M. Muppala, S.P. Woolet and B.R. Haverkort, Composite performance and dependability analysis, Performance Evaluation 14 (1992) 197-215. Composite performance and dependability analysis is gaining importance in the design of complex, fault-tolerant systems. Markov reward models are most commonly used for this purpose. In this paper, an introduction to Markov reward models includ...

Journal:

Volume   Issue

Pages  -

Publication date: 1995